Cortex - Life Sciences Insights

10 minute read

Unlocking the Potential of AI in Life Sciences and Healthcare: what do companies need to consider?

The artificial intelligence (AI) revolution is simultaneously one of the biggest opportunities and challenges facing professionals, consumers and regulators in all industries.  The life sciences and healthcare sectors are no exception.  In fact, because these sectors are concerned with improving quality of life and the betterment of public health, it is arguable that they, more so than other sectors, have both the greatest potential to benefit from the use of AI and the greatest risks to consider.

Recent developments involving the use of AI in life sciences and healthcare have the ability to transform how data is analysed, how doctors interact with patients, and how disease is diagnosed and managed.  While there is certainly cause for optimism that AI can improve our collective health, a pragmatic approach to AI use and integration is required by companies wishing to adopt the technology, given the very real issues that the use of AI in these sectors may entail.

Recent AI Developments in Life Sciences and Healthcare

Recent AI developments in the life sciences and healthcare sectors include exciting advances such as:

  • A new AI tool, “BioGPT”, which achieved human parity in analysing scientific research and extracting relevant data to generate answers to questions.[1]

  • “Bedrock”, a recently announced suite of AI models that can be used by companies as the building blocks for their own products.[2]

  • The use of AI in medical notetaking, for example, through platforms such as “Fluency Align” and “Dragon Ambient eXperience” (which has now integrated GPT-4).[3]

  • Collaboration between technology companies that aims to use AI to assist doctors in responding to online patient queries in a simple and personalised manner.[4]

  • “DeepGlioma”, an AI tool capable of diagnosing brain tumour mutations within 90 seconds with 93% accuracy.[5]

  • A non-invasive decoder that makes use of AI technology to generate text from brainwaves.[6]

  • The passing by the US Senate of the FDA Modernization Act 2.0, which allows the use of alternatives to animal testing in drug and biological development, including in silico testing and computer modelling.[7]  It has even been suggested that AI could be used to generate digital mice for use in predictive analysis.[8]

Undoubtedly, AI technologies can make real and practical differences to drug development and patient care, for example, by freeing doctors from typing notes during consultations and allowing a more patient-focused approach, or removing the requirement for clinicians to recall conversations with patients that happened hours earlier.  Within the legal realm too, these tools can be important.  For instance, AI could be used to avoid delayed, missed or inaccurate diagnoses, which might reduce patient claims for negligence.  Another example is the use of AI to identify novel drugs and methods of treatment that may be patentable. As we recently reported, GPT-4 has great potential to assist in drug discovery.

While optimism about the future of AI is justified, a realistic approach to its adoption and use, together with a keen awareness of both its capabilities and its limitations, is key, and many questions remain to be considered in relation to the risks posed by this ground-breaking technology.  In the life sciences and healthcare sectors, where a mistake by AI can have significant, human consequences, the stakes are high for companies to find solutions to the known challenges presented by AI and to pre-empt the challenges yet to arise.

Some Key Issues in AI Implementation

Accuracy

One of the primary concerns regarding AI use is that the information it generates can appear quite plausible, but can at times be inaccurate or factually wrong (a phenomenon referred to as “AI hallucination”).  This is particularly concerning in the life sciences and healthcare fields.  While it is now well-documented that generative AI language models like ChatGPT can pass medical licensing exams,[9] the accuracy of responses to examination questions does not necessarily translate to accuracy in responding to real life clinical situations.  

 

A recent Stanford University study posed 64 clinical questions to GPT-4, such as “In patients at least 18 years old who are prescribed ibuprofen, is there any difference in peak blood glucose after treatment compared to patients prescribed acetaminophen?”.  According to preliminary results, almost 60% of GPT-4’s answers did not agree with the answers given by medical specialists.  Moreover, when the same question was asked multiple times over a number of days, the answers returned by GPT-4 varied considerably,[10] suggesting that the technology is not yet sophisticated enough to be implemented in a clinical context (at least not without stringent human oversight).
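
To make this kind of variability concrete, the short sketch below shows one way a team might probe answer consistency before relying on a model clinically: ask the same question several times and measure how much the answers agree.  It is an illustrative check only; the `query_model` wrapper is a hypothetical stand-in for whichever LLM API is used, and the simple text-similarity measure is not the methodology of the Stanford study.

```python
from difflib import SequenceMatcher
from itertools import combinations

def query_model(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API call.
    Replace with the real client for the model under evaluation."""
    raise NotImplementedError

def answer_consistency(question: str, runs: int = 5) -> float:
    """Ask the same question several times and return the mean pairwise
    text similarity of the answers (1.0 = identical every time)."""
    answers = [query_model(question) for _ in range(runs)]
    scores = [
        SequenceMatcher(None, a, b).ratio()
        for a, b in combinations(answers, 2)
    ]
    return sum(scores) / len(scores)

# Example: flag questions whose answers drift too much between runs
# so they are routed to specialist review rather than used directly.
# if answer_consistency(clinical_question) < 0.8:
#     route_to_specialist_review(clinical_question)   # hypothetical handler
```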

 

The use of AI in medical notetaking presents similar issues.  While AI holds great potential in this context, as described in the Recent Developments section above, the challenges must be recognised.  A visit to the doctor is usually not a simple conversation.  An AI notetaker must not only filter out irrelevant small talk, but also contend with a conversation in which one person might speak over or interrupt the other, and which may include non-verbal communication (such as a nod or a gesture to a particular body part).  All of these factors, and others, present challenges to the accuracy of AI in medical notetaking, again emphasising the need for users to be conscious of the limitations of the technology and to implement strategies to adapt to them.  For example, users should ensure that the use of, and output generated by, AI tools is always subject to human oversight.[11]
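
As a minimal sketch of what “always subject to human oversight” can mean in practice, the example below holds an AI-drafted consultation note as a draft until a clinician reviews, amends and signs it off.  The data structure and function names are assumptions for illustration, not drawn from any particular notetaking product.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DraftNote:
    """An AI-generated consultation note awaiting clinician review."""
    patient_id: str
    ai_text: str
    approved: bool = False
    reviewer: str | None = None
    reviewed_at: datetime | None = None
    amendments: list[str] = field(default_factory=list)

def sign_off(note: DraftNote, clinician: str, corrections: list[str]) -> DraftNote:
    """Record the clinician's review; only reviewed notes become approved."""
    note.amendments.extend(corrections)
    note.reviewer = clinician
    note.reviewed_at = datetime.utcnow()
    note.approved = True
    return note

def commit_to_record(note: DraftNote) -> None:
    """Refuse to file any AI-generated note that has not been signed off."""
    if not note.approved:
        raise PermissionError("AI-generated note must be reviewed before filing.")
    # write to the clinical record system here (omitted in this sketch)
```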

 

Even as the accuracy of AI continues to improve, human oversight of AI tools, particularly in the life sciences and healthcare sectors, will remain of paramount importance, and users must strike a delicate balance between exploring the potential of AI and remaining ever-conscious of its risks and limitations.

Scientific, Sex and Racial Bias

AI is generally only as good as the scientific data it is trained on, and if that data is biased, the AI output may be too.  For example, BioGPT is trained on published biomedical literature.[12]  If this literature is biased, the AI software may inadvertently perpetuate these biases via its output.  

 

Unfortunately, as has been recognised, medical literature is often biased.  When it comes to sex, a study of medical textbooks revealed that images of male bodies were used three times as often as images of female bodies to illustrate “neutral” body parts (i.e. body parts that are present in both men and women).[13]  An AI algorithm trained on images that overwhelmingly present the male body as the “norm” might, for example, erroneously conclude that a woman’s hand is abnormally small, because the data on which it relies teaches that “normal” hands are larger.

 

Similarly, for years, women have been under-represented in clinical trials.[14]  The lack of data available regarding sex-specific diagnosis and treatment for disease in women means that advances in AI are less likely to translate to advances in life sciences and healthcare for women as AI programs are trained on existing data.  
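
One practical response is to audit the composition of a training set before the algorithm ever sees it.  The sketch below is a hypothetical check of sex representation in a labelled dataset; the field name and threshold are assumptions chosen for illustration.

```python
from collections import Counter

def audit_sex_balance(records: list[dict], threshold: float = 0.40) -> dict:
    """Count how often each sex appears in the training data and flag
    any group that falls below the chosen representation threshold."""
    counts = Counter(r["sex"] for r in records)
    total = sum(counts.values())
    shares = {sex: n / total for sex, n in counts.items()}
    flagged = {sex: share for sex, share in shares.items() if share < threshold}
    return {"shares": shares, "under_represented": flagged}

# Toy dataset mirroring the textbook imbalance described above (3:1 male images):
dataset = [{"sex": "male"}] * 300 + [{"sex": "female"}] * 100
print(audit_sex_balance(dataset))
# {'shares': {'male': 0.75, 'female': 0.25}, 'under_represented': {'female': 0.25}}
```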

 

Racial bias can also be an issue: just as AI algorithms trained on existing scientific data risk perpetuating existing scientific (and sex-based) biases, AI algorithms trained on racially-biased data sets risk producing racially-biased output.  For example, where AI algorithms underlie applications that are designed to make decisions (known as automated decision-making), a biased algorithm could lead to biased and even discriminatory decision-making, with an undesirable human impact depending on the criticality of the relevant decision.  A 2019 study found racial bias in an algorithm widely used to guide health decisions in the United States.[15]
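
For automated decision-making in particular, a basic pre-deployment safeguard is to compare the algorithm's outcome rates across groups.  The sketch below computes a simple demographic-parity ratio on assumed inputs; it is an illustrative check only, not the metric used in the 2019 study cited above.

```python
from collections import defaultdict

def outcome_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, favourable_outcome) pairs from an algorithm's output."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def parity_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest favourable-outcome rate.
    Values well below 1.0 suggest the algorithm treats groups unevenly."""
    return min(rates.values()) / max(rates.values())

# Example usage with toy data:
rates = outcome_rates([("A", True), ("A", True), ("A", False),
                       ("B", True), ("B", False), ("B", False)])
print(rates, parity_ratio(rates))  # group A ~0.67, group B ~0.33 -> ratio ~0.5
```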

 

It is clear that making use of AI in life sciences and healthcare can significantly improve health outcomes.  However, the examples above are just some of the matters companies need to consider carefully when deciding whether and how to adopt AI technologies.  In particular, companies that wish to adopt AI technologies that learn from their own data sets need to give particular consideration to how that data will be used to train the algorithm: how will the data be selected or excluded, what inherent biases or inaccuracies (if any) does it contain, and is it confidential or proprietary?  Considering these matters at the data selection and training stage will help to avoid further perpetuating and amplifying inequalities, so that the true power of AI can be realised.

Regulatory Considerations – Industry Leads the Way

Increasingly, AI developers, governments and industry are recognising that AI tools, and the use of AI (including in the life sciences and healthcare sectors), must be regulated.  Recently, we reported that the FDA’s Center for Drug Evaluation and Research released a discussion paper on AI in drug manufacturing that identified several regulatory considerations important to policy development, including:

  • whether and how the application of AI in pharmaceutical manufacturing is (already) subject to regulatory oversight;
  • standards for developing and validating AI models; and
  • the challenges that continuously learning AI systems, which adapt to real-time data, may pose to regulatory assessment and oversight.

While this paper specifically considers drug manufacturing, these considerations apply equally to AI use in life sciences and healthcare generally, and serve as useful starting points to examine both the current regulatory framework, and what this framework should look like in 10, 20 or 50 years.

We have also reported extensively on proposed new AI regulations in the European Union (for example, see here, here and here).  While there is currently no comprehensive legislative regulation of AI in Australia, the Australian federal government has recently indicated that it is considering targeted regulation, including the potential for a risk-based classification system that rates AI tools and imposes tiered levels of restriction depending on the danger presented by a given AI system.  In the meantime, however, industry, both nationally and internationally, is leading the way.
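
A risk-based classification system of the kind being considered can be pictured as a simple mapping from risk tier to obligations.  The tiers and obligations below are illustrative assumptions, loosely modelled on the risk-based approach proposed in the EU, not the terms of any enacted Australian rule.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical obligations attached to each tier for illustration only.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["voluntary code of conduct"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.HIGH: ["conformity assessment", "human oversight", "audit logging"],
    RiskTier.UNACCEPTABLE: ["prohibited"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the restrictions a given tier would attract under this illustrative scheme."""
    return OBLIGATIONS[tier]

# e.g. an AI triage assistant classified (hypothetically) as high-risk:
print(obligations_for(RiskTier.HIGH))
```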

Many global pharmaceutical companies have published statements and policies governing their ethical and transparent use of AI.  Particularly noticeable is the collective understanding that despite the significant upsides that AI presents, a long-term framework that adequately considers explainability, accountability, inclusivity, equality (both in terms of benefiting from AI output, and reducing bias in AI input) and data security/privacy is paramount.  Similar policies have been published by medical device companies, as well as some of the world’s largest and most well-known technology companies.

It is apparent that the imperatives expressed in these AI policies are attempting to address many of the issues outlined in this article.  For example, one global life sciences company policy points out the need for data use to include processes to identify, prevent, and offset poor quality, incomplete, or inaccurate data, and highlights that there may otherwise be a risk of bias or harm to individuals.  Another life sciences company, in an attempt to address scientific, sex and racial bias, has committed to ensuring its use of AI is sensitive to social, socio-geographic and socio-economic issues, and to protecting against discrimination or negative bias.  A third life sciences company tackles the question of responsibility by promising to maintain human accountability in decision-making processes of designing, delivering and operating AI systems. 

There is widespread acknowledgement in the life sciences and healthcare industries of the need for such policies, and for AI regulation generally.  As discussed in DLA Piper’s At The Intersection of Science and Law podcast, these policies are commendable, but they are only the beginning.  The crucial next step is the implementation of these policies (for instance, in AI design) and managing the changing AI landscape effectively.

Looking Forward

The Australian (and global) community as a whole is becoming increasingly aware of both the benefits and risks that AI use presents.  As already noted, the Australian government is seeking submissions on governance mechanisms, including regulations, standards, tools, frameworks, principles and business practices, with the goal of ensuring AI is developed and used safely and responsibly in Australia.  Non-government institutions have also sought to increase public awareness and promote discussion regarding the state of AI governance in Australia (for example, see the recent comprehensive report by the Human Technology Institute at the University of Technology Sydney). 

If recent advances are anything to go by, the use of AI in life sciences and healthcare is going to revolutionise the way we develop pharmaceuticals and medical devices, treat patients, take medical notes, answer medical questions, and diagnose and manage illness.  It is clear that these technologies have the power to benefit humanity enormously, and their use should be encouraged where appropriate, but care is required.  AI is not perfect, and can make mistakes with potentially significant consequences.  While regulation may currently be lacking in many countries, including Australia, companies should expect that governments will lay down rules in the near future.  In the meantime, industry is at the forefront of trying to meaningfully identify and resolve the challenges posed by AI.  Not only will those companies be in a prime position to deal with impending regulation, but their policies and creative solutions will likely influence and guide the design of the future regulatory landscape. 

Find Out More

For more information on AI and the emerging legal and regulatory standards, contact the authors or your usual DLA Piper contact, or find out more at DLA Piper’s focus page on AI.

You can find a more detailed guide on the AI Regulation and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.

To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers) you can use DLA Piper’s AI Scorebox tool.

You can find more on AI, technology and the law at Technology’s Legal Edge, DLA Piper’s tech sector blog.

 

Resources: 

[1] https://twitter.com/MSFTResearch/status/1618647707135918088

[2] https://aws.amazon.com/bedrock/

[3] https://www.statnews.com/2023/04/19/amazon-3m-health-artificial-intelligence-bedrock/?utm_campaign=health_tech&utm_medium=email&_hsmi=255020478&_hsenc=p2ANqtz-_T7d1ODxd3omzQHOHZKWU1ixApusAUogZePPqk9vF5by-4u4i2Tf81WP8V7uZCS10wEav5l7lXdYcqZwfM6w0PZgxMPqwLaa-MRlnu8pCnKYXx3XE&utm_content=255020478&utm_source=hs_email

https://www.statnews.com/2023/03/20/microsoft-nuance-gpt4-dax-chatgpt/

[4] https://www.statnews.com/2023/04/20/microsoft-epic-generative-ai/?utm_campaign=health_tech&utm_medium=email&_hsmi=255020478&_hsenc=p2ANqtz-_HR3Rum1AZ9Kh7skw_1X5jU309fwPP1MCXTBB13R_jtG7s5qzoKxUAFUnEAfZ1aSP7x88d_EmKR7DzEdOshFaitap5UVhZ5uHwLf9FW193KBIR3Vo&utm_content=255020478&utm_source=hs_email

[5] https://www.nature.com/articles/s41591-023-02252-4

[6]  https://www.nature.com/articles/s41593-023-01304-9

[7] https://www.congress.gov/bill/117th-congress/senate-bill/5002/text

[8] https://www.clinicaltrialsarena.com/features/predictive-analytics-drug-development/

[9] https://www.abc.net.au/news/science/2023-01-12/chatgpt-generative-ai-program-passes-us-medical-licensing-exams/101840938

[10] https://hai.stanford.edu/news/how-well-do-large-language-models-support-clinician-information-needs

[11] https://www.statnews.com/2023/03/28/hospitals-ai-artificial-intelligence-health-microsoft-nuance-notes/?utm_campaign=health_tech&utm_medium=email&_hsmi=252035672&_hsenc=p2ANqtz-8u-njZkdYORbKkd9MrEkRyaAvdA7bIvzjbjsvWegep8bNQ9JCDLVXxcjmCMT0MJnVsAh0NJ8qNsCCX6Go8ZG0uHu-JNEBBNqtbvfJKGkUeUUIaEiM&utm_content=252035672&utm_source=hs_email

[12] https://academic.oup.com/bib/article/23/6/bbac409/6713511

[13] The Drugs Don’t Work – reference 4

[14] Caroline Criado Perez, Invisible Women (Vintage, 2020), p 200.

[15] https://www.science.org/doi/10.1126/science.aax2342